Anthropic has not been shy in expressing concern about its latest AI tool. Bloomberg

What is Mythos: Anthropic's new AI model worries many experts


Cody Combs

To say that Anthropic’s largely unreleased Mythos AI model has caused a stir would be a vast understatement: the technology has shown it could have a major effect on cybersecurity.

Never before has a technology tool used and seen by so few people caused so much concern among cybersecurity experts, government officials and Big Tech executives.

AI is often the focus of hype, and rarely a day goes by in which a purportedly game-changing innovation is announced.

But Claude Mythos has struck a particular chord. Anthropic says that, due to its advanced cybersecurity capabilities and its potential to rapidly exploit software vulnerabilities, the model is too dangerous for general release. Instead, the company is limiting access to a small group of vetted partners through Project Glasswing.

Anthropic has described Mythos as 'too dangerous' for public use. Bloomberg

“We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity,” Anthropic says on its website.

The company said it had granted access to Mythos to certain employees of Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia and other organisations, in the hope of preparing them to address cybersecurity vulnerabilities.

“Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely,” the Anthropic website reads.

“The fallout for economies, public safety and national security could be severe … Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.”

Media reports have emerged about meetings between Anthropic executives and US federal officials, with the potential ramifications of Mythos as the focus.

The Bank of England has said it is preparing to stem any vulnerabilities Mythos might discover.

In the long term, Anthropic hopes Mythos can be viewed as a win for cybersecurity, helping to quickly identify potential flaws that might have otherwise eluded even the most seasoned experts.

But in the short term, the company is acknowledging there will be bumps along the way, especially if Mythos or similar tools fall into the wrong hands.

The prevailing view in the weeks since Project Glasswing and Mythos were announced has been that the AI developer is so impressed by – and worried about – the capabilities of its latest tool that it must keep it under wraps and allow experts to prepare for the worst-case scenarios.

Not everybody is convinced, however.

US President Donald Trump's former AI and cryptocurrency adviser David Sacks has speculated that Anthropic's motives might not be as pure as advertised.

In a social media post, Mr Sacks speculated that Anthropic's true motive for not releasing Mythos to the public stemmed from the company's inability to provide the computing power to support it.

“And then by holding it back, they create this impression of scarcity and altruism, and it turns into this gigantic marketing event for their product, because everyone in the government's like, 'Oh wow, they're holding it back because it's so amazing,'” Mr Sacks wrote.

He later gave Anthropic the benefit of the doubt in terms of concerns about cybersecurity.

Worst fears realised

Bloomberg recently reported that some of Anthropic's worst fears about the technology falling into the hands of nefarious actors have already been realised, quoting a source as saying that a small group of unauthorised users had accessed Mythos.

Cybersecurity experts have for years warned that powerful AI models, combined with increasingly robust quantum computers, might be able to break encryption defences, guess passwords and expose sensitive data.

This scenario has often been referred to as “Q-Day”, Mohammed Aboul-Magd, vice president of product at SandboxAQ, told The National in December.

“So much encryption is effectively at risk of being broken,” he warned.

Anthony Aguirre, chief executive of the Future of Life Institute, a non-profit promoting the idea of steering “transformative technology towards benefiting life and away from extreme large-scale risks”, echoed the concerns about Mythos.

“I think the thing we've been most warning about is that we're deliberately trying to build AI systems that are much smarter than people and that exceed human capability,” he said.
“The implications of that are very extreme.”

He added that even if Anthropic appears to be showing extreme caution with Mythos, more regulatory guardrails must be enacted.

Anthropic and its chief executive Dario Amodei have been an exception on the Big Tech scene, speaking out in support of stricter regulations on AI development.

Mr Amodei and Anthropic have also been highly critical of chip exports to China, which they described as compromising the US's lead in the race for AI superiority.

Mr Aguirre acknowledged that Anthropic might be trying its best to be cautious, but added that the existence of Mythos shows that the company is pushing the world towards an AI inflection point.

“I like the people at Anthropic, I think I support a lot of their positions, but I fundamentally believe they're doing something that is bad for humanity,” he said.

Updated: April 29, 2026, 6:26 PM