The White House announced on May 4, 2026, the establishment of the AI Model Safety and Oversight Working Group, a new interagency body designed to implement a formal federal review process for the next generation of artificial intelligence systems. This initiative represents a significant expansion of the regulatory framework established by the 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The primary objective of the working group is to ensure that frontier AI models—defined by their computational power and potential for dual-use capabilities—undergo rigorous safety evaluations before they are released to the public or integrated into critical infrastructure.
According to a fact sheet released by the Office of Science and Technology Policy, the new oversight framework will apply to models trained using more than 10^26 integer or floating-point operations. Developers of these high-compute models will be required to submit detailed reports on their internal red-teaming exercises, safety mitigation strategies, and cybersecurity protocols to the U.S. AI Safety Institute. The working group, which includes senior officials from the Department of Commerce, the Department of Energy, and the Department of Homeland Security, will have the authority to request additional testing or delay deployment if a model is found to pose substantial risks to national security, public health, or economic stability.
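The compute threshold is the one quantitative trigger in the framework. As a minimal illustration only (not part of the policy text), the check could be sketched using the common rule of thumb that dense transformer training consumes roughly 6 FLOPs per parameter per training token; the function names and the example model sizes below are hypothetical.

```python
# Illustrative sketch: does an estimated training run cross the 10^26-operation
# reporting threshold described in the OSTP fact sheet?
# The ~6 * parameters * tokens estimate is a widely used approximation for
# dense transformer training compute, not a definition from the policy itself.

THRESHOLD_OPS = 1e26  # reporting threshold (integer or floating-point operations)

def estimated_training_ops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * parameters * tokens

def requires_federal_review(parameters: float, tokens: float) -> bool:
    """True if the estimated training compute exceeds the 10^26-op threshold."""
    return estimated_training_ops(parameters, tokens) > THRESHOLD_OPS

# Hypothetical examples:
print(requires_federal_review(70e9, 15e12))   # 70B params, 15T tokens -> False
print(requires_federal_review(2e12, 10e12))   # 2T params, 10T tokens  -> True
```

Under this approximation, a 70-billion-parameter model trained on 15 trillion tokens sits around 6 x 10^24 operations, well below the line, while a 2-trillion-parameter model on 10 trillion tokens lands near 1.2 x 10^26 and would trigger review.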
The May 4 announcement also detailed the specific technical benchmarks that the working group will utilize to assess model safety. These include evaluations for chemical, biological, radiological, and nuclear risks, as well as the model's ability to facilitate autonomous cyberattacks or engage in deceptive behavior. The National Institute of Standards and Technology has been tasked with updating its AI Risk Management Framework to include these new pre-deployment standards. Officials stated that the goal is to create a standardized safety certificate that developers must obtain before moving from the testing phase to a broad commercial release.
In a statement accompanying the announcement, Secretary of Commerce Gina Raimondo emphasized that the federal review process is intended to foster innovation by providing a clear set of safety guidelines. The administration confirmed that it has already begun consultations with major AI labs and cloud service providers to ensure the technical feasibility of the reporting requirements. These requirements include the disclosure of hardware clusters used for training and the implementation of robust watermarking for AI-generated content. The working group is expected to release its first set of formal regulatory recommendations by the end of the current fiscal quarter, with mandatory compliance applying to model training runs initiated in late 2026 or after.
The White House also clarified that the oversight will extend to open-source models that meet the specified compute thresholds. While the administration expressed support for the open-source ecosystem, the new policy mandates that any model capable of significant self-improvement or autonomous operation must undergo the same federal scrutiny as proprietary systems. This move follows a series of closed-door meetings with industry leaders and academic experts throughout the spring of 2026 regarding the escalating risks of unaligned AI systems.