SAN FRANCISCO — AI developer Anthropic confirmed on April 22, 2026, that it is investigating reports of unauthorized access to its unreleased Claude Mythos Preview model. The investigation follows reports that a small group of users gained entry to the model through a third-party contractor environment on the same day the company publicly announced the model's existence. Anthropic stated that it has found no evidence that the incident impacted its core infrastructure or extended beyond the specific vendor environment in question.
Claude Mythos Preview is a specialized, compute-intensive large language model designed for advanced cybersecurity tasks. According to technical documentation and official statements, the model can autonomously discover zero-day vulnerabilities (previously unknown software flaws) at scale across every major operating system and web browser. In internal testing, Mythos demonstrated the ability to chain together multiple software bugs to escape both renderer and operating system sandboxes, a feat that typically requires months of effort from expert human researchers. The UK’s AI Security Institute (AISI) recently reported that Mythos was the first model to complete a 32-step simulated cyber-attack, solving the challenge in 30% of its attempts.
Due to these capabilities, Anthropic has withheld Mythos from general release, categorizing it as a dual-use technology that poses significant risks if misused. The model is currently restricted to a curated consortium of approximately 40 elite technology and financial partners under an initiative known as Project Glasswing. Authorized partners, including Apple, Amazon, Microsoft, Google, NVIDIA, and Goldman Sachs, use the model to identify and patch critical software flaws before they can be exploited by hostile actors.
Initial reports indicate the unauthorized access was achieved by a small group of individuals communicating via a private Discord channel dedicated to tracking unreleased AI models. The group reportedly combined several methods, including guessing the model’s URL based on Anthropic’s established URL-formatting conventions and using information exposed in a prior data breach at the AI training startup Mercor. At least one member was reportedly employed by a third-party contractor working with Anthropic and facilitated the breach through their existing credentials. While the group reportedly gave demonstrations of the model to media outlets, they claimed their interest was limited to experimentation rather than offensive exploitation.
Government authorities have expressed concern over the breach. Kanishka Narayan, the UK’s AI Minister, said businesses should be alert to the model’s ability to rapidly surface systemic IT flaws. Anthropic maintains that its investigation is ongoing and that it is strengthening third-party access controls to prevent future lapses in its supply-chain security.