On April 22, 2026, Anthropic confirmed that it is investigating a potential security breach of its unreleased frontier AI model, Claude Mythos Preview. The investigation was launched after reports that a small group of unauthorized users gained access to the model through a third-party vendor environment. Mythos, which Anthropic has characterized as a watershed moment for global cybersecurity, has been withheld from general release because of its unprecedented capacity to identify and exploit zero-day vulnerabilities autonomously.
Technical specifications for Mythos indicate a substantial advance in long-context reasoning and software engineering over the previous model, Claude 4.6 Opus. Anthropic's internal red-teaming documented that the model identified thousands of high-severity vulnerabilities across every major operating system and web browser. Specific technical highlights include the discovery of a 27-year-old remote crash vulnerability in the OpenBSD kernel and a 16-year-old flaw in the FFmpeg multimedia framework. According to Anthropic, the model can chain multiple complex memory corruption bugs to achieve full system compromise, often in response to simple natural language prompts that require no specialized security expertise from the user.
In response to these capabilities, Anthropic established Project Glasswing, a gated consortium of roughly 50 organizations tasked with using Mythos for defensive hardening. The group includes major technology and financial entities such as Amazon Web Services, Apple, Google, Microsoft, NVIDIA, Cisco, Palo Alto Networks, and JPMorgan Chase. Anthropic has pledged $100 million in usage credits to these partners and $4 million in direct donations to open-source security organizations. Despite these safeguards, Anthropic acknowledged on Wednesday that it is investigating reports of unauthorized access to the model via a private online forum, though the company said it has not yet detected any compromise of its primary internal systems.
The exclusive structure of Project Glasswing has drawn significant antitrust scrutiny from international regulators. Critics and smaller technology firms argue that by limiting access to a "superhacker" class of AI to a select group of market incumbents, Anthropic is creating a tiered security landscape. This invite-only approach is being examined for potential anti-competitive effects, as consortium members gain a first-mover advantage in patching their own systems while the broader software ecosystem remains unaware of the thousands of vulnerabilities Mythos has identified. Regulators are specifically investigating whether withholding these findings from the general public, combined with concentrating the tool among industry leaders, constitutes an unfair competitive advantage in the cybersecurity and cloud services sectors.