AI developer Anthropic says its latest Claude AI model is so powerful, and potentially dangerous, that it will not be made available to the general public.
Dubbed Claude Mythos, the software is the newest member of the Claude AI family, Anthropic's line of artificial intelligence models that act as chatbots and AI assistants, similar to ChatGPT and Google’s Gemini.
“It is a frontier AI model, and has capabilities in many areas—including software engineering, reasoning, computer use, knowledge work, and assistance with research—that are substantially beyond those of any model we have previously trained,” Anthropic wrote in the preview’s system card.
The system card also states that Claude Mythos “has demonstrated powerful cybersecurity skills, which can be used for both defensive purposes (finding and fixing vulnerabilities in software code) and offensive purposes (designing sophisticated ways to exploit those vulnerabilities).”
It is those capabilities that led Anthropic to decide not to release the software to the general public.
“Claude Mythos’s large increase in capabilities has led us to decide not to make it generally available,” the company wrote. “Instead, we are using it as part of a defensive cybersecurity program with a limited set of partners.”
Anthropic cites these partners as “organizations that maintain important software infrastructure, under terms that restrict its uses to cybersecurity.”
It is these kinds of technologies that Branka Marijan, a senior researcher at Project Ploughshares, says should be monitored with caution.
“The implications for cybersecurity and broader national security that they are flagging, I don’t think that they’re hypotheticals,” she said. “I do think there are actual concerns that we should be paying more attention to now.”
Daniel Escott, the CEO of Formic AI, said that Anthropic is “choosing consciously” to not release Claude Mythos.
“Their argument against releasing it to the general public is that the same systems, functionality and capability to protect infrastructure using this AI system could equally be used to attack the same infrastructure,” he said.
“Anthropic is making their own choices on who they’re willing to give access to this system for. But at the same time, I would imagine those partners are probably saying ‘you’re only allowed to sell to us,’ perhaps a limited set of other entities, but they don’t want everyone to have access to the same kinds of technology,” he said.
“And if Anthropic isn’t going to sell it to them, someone else will develop it and sell it.”
Escott also warned that Anthropic’s system card on Claude Mythos should be taken “with a grain of salt.”

“Based on the documentation, it seems that they’ve been training this on a combination of the open-source data sets that they’d been using for all of Anthropic’s other models,” he said.
“This is no different than what ChatGPT or Microsoft Co-Pilot is doing, where they’re just scraping, some would argue stealing, information from all over the internet and putting it all into one big data set that they can train on.”
Marijan said she would like to see “more clarity from Anthropic and these other companies about actually how concerning is this from what they’re telling us.”
“It is absolutely concerning,” she said. “It’s undermining all of these safeguards that companies might have in place.”
Moshe Lander, an economics professor at Concordia University, said that not releasing Claude Mythos to the public just yet allows for potential flaws to be fixed without impacting users.
“If some pharmaceutical company is developing a drug, and they say, for the time being, ‘we’re not releasing it for public use,’ is there something wrong with that? I would say, actually, I think that’s probably being responsible,” he said.
“If the company is saying, ‘look, we’re not putting it into public use ever,’ that’s something different. What they’re saying is ‘we’re not putting it into public use now,’ and I think that’s being extremely responsible. Let’s see how this thing is going to be used. Let’s see where its defects are,” he said.
“If they do find that there are weaknesses, that ability to correct itself or fix any flaws might not be a bad thing.”
There remain significant questions around the world, including in Canada, about what it will take for governments to regulate AI and provide legal frameworks for its use.
Lander also said that news of an AI system being withheld from release is bound to raise questions for many, with no easy answers.
“I think that because people are generally worried about AI in general, that when we hear there’s an AI product that’s coming along that’s not available for public use, we hit the panic button and say, ‘wait a second, something doesn’t sound right here,’” he said.
In January, the Canadian Centre for Cyber Security (Cyber Centre) released its ransomware threat outlook for 2025-27, stating that with the growth of AI, “these threats have become cheaper and faster to conduct and harder to detect.”
As a result, numerous Canadian organizations and businesses “regardless of size or sector,” as well as individuals, are susceptible to ransomware attacks. However, “critical infrastructure and large corporations” were found to be the top targets for ransomware activity.
The report found that the reported number of ransomware incidents increased by an average of 26 per cent year over year from 2021 to 2024.
The report also found that total recovery costs associated with cybersecurity incidents reached $1.2 billion in 2023, up from $200 million over the 2019-to-2021 period.
However, Marijan believes there should be more protocols in place governing how businesses use these tools.
“I think what it points to really is this clear gap in governance where we have companies that are deciding what they think is concerning. We should really have processes,” she said.
“So, we absolutely are in the space where these companies are deciding essentially what they think are concerns or flagging them. And there’s no process in place for this, for any guardrails really to appear.”

