Mythos in the hands of attackers threatens a storm beyond the ability of security teams to weather. Claude Security is designed to counter this.
Anthropic’s Mythos AI model won’t be the only frontier model able to compress the time-to-exploit to a meaningless number of minutes. Other foundation model developers will produce their own models with similar capabilities – and these models will find their way into the hands of criminals and nation-state adversaries.
Without their own specialized AI defensive systems, defenders will be overrun. So, Anthropic has now launched Claude Security. “Claude Security is now available in public beta to Claude Enterprise customers,” it blogged on April 30, 2026.
Claude Security can be accessed from the Claude.ai sidebar or at claude.ai/security. It works with Claude Opus 4.7 and requires no API integration or custom agent build. Users can select one of their repositories (or a specific directory or branch) and start a scan. It seeks out vulnerabilities, explains its findings, provides confidence information on the severity of each vulnerability and how it can be reproduced – and generates instructions for a targeted patch (which can be worked through with Claude Code on the web).
Claude Code has been battle-tested over the past few months. Anthropic learned that security teams want high confidence in an alert (that is, no more false positives), which is supplied by the product’s confidence rating. They want minimal time from scan to fix (Claude Security is already reducing days of back-and-forth between the security team and the engineers to a single sitting). And teams want ongoing coverage rather than one-off audits. So Anthropic has added an option to schedule scans, allowing a regular cadence of reviewing and acting on findings.
It is, says Anthropic, “Part of our broader push to put frontier capabilities in defenders’ hands.” Another aspect of this is working closely with major cybersecurity firms. “CrowdStrike, Microsoft Security, Palo Alto Networks, SentinelOne, Trend.ai, and Wiz are integrating Opus 4.7’s capabilities into the security platforms that enterprises already run on today.”
Other companies (Accenture, BCG, Deloitte, Infosys, and PwC) are working to deploy Claude-integrated solutions for vulnerability management, secure code review, and incident response programs. Security teams will be able to continue working as they already do, but with far greater speed and efficiency.
“Together we’re helping our clients close the critical gap between threat discovery and remediation,” comments Adnan Amjad, partner and US cyber leader at Deloitte & Touche.
“Together with Anthropic, we bring a more comprehensive view of clients’ security posture… [and] … will generate richer insights, faster, helping organizations defend their environments while deploying AI at speed,” says Vanessa Lyon, an MD and senior partner at BCG.
“This isn’t AI merely augmenting security,” adds Satish H.C., EVP & chief delivery officer at Infosys; “it’s AI redefining how enterprises defend themselves.”
Claude Security is available now to Claude Enterprise customers, and will be available to Claude Team and Max customers in the near future.
Related: Anthropic Unveils ‘Claude Mythos’ – A Cybersecurity Breakthrough That Could Also Supercharge Attacks
Related: The Mythos Moment: Enterprises Must Fight Agents with Agents
Related: Claude Mythos Finds 271 Firefox Vulnerabilities
Related: OpenAI Widens Access to Cybersecurity Model After Anthropic’s Mythos Reveal