OpenAI has launched GPT-5.4-Cyber, a new model of its flagship AI. The launch comes only one week after Anthropic rolled out its new AI model, Claude Mythos. The new model is a variant of the main GPT-5.4 system, specifically optimized for defensive cybersecurity use cases.
According to OpenAI’s announcement, the goal is to provide better tools for network and system defenders “responsible for keeping systems, data, and users safe,” so that they can “find and fix problems faster.”
Scaling the Defense Program
Alongside the new model, OpenAI is expanding its Trusted Access for Cyber (TAC) program, which first launched in February 2026. The program is now available to thousands of verified individual defenders and hundreds of teams responsible for securing critical infrastructure. By providing these professionals with more advanced capabilities, the company wants to help them stay ahead of threat actors who are also experimenting with AI.
An integral part of this strategy is the Codex Security tool, which moved into research preview earlier in 2026. OpenAI claims the system has helped identify and patch over 3,000 critical and high-severity vulnerabilities. It does so by monitoring codebases and suggesting fixes before they can be exploited in a cyberattack.
New Technical Capabilities and Access
The GPT-5.4-Cyber model introduces a much-discussed new feature called binary reverse engineering. It is specifically designed to help security specialists analyze compiled software to find malware and vulnerabilities, even when they don’t have the source code. The feature was under development from GPT-5.2 through GPT-5.3-Codex before its official launch now.
“Customers in the highest tiers will get access to GPT‑5.4‑Cyber, a model purposely fine-tuned for more cyber capabilities and with fewer capability restrictions. This is a version of GPT‑5.4 which lowers the refusal boundary for legitimate cybersecurity work and enables new capabilities for advanced defensive workflows, including binary reverse engineering capabilities that let security professionals analyze compiled software for malware potential, vulnerabilities, and security robustness without needing access to its source code,” the company explained in a detailed announcement post.
GPT-5.4-Cyber is more permissive for security tasks, which is why OpenAI is not allowing unrestricted access; to use the most advanced features, users must first verify their identity. The company uses an authentication process to ensure its software is used by legitimate security professionals and not hackers or espionage actors.
Individual defenders need to sign up and verify themselves at chatgpt.com/cyber, while enterprises can request access through their official OpenAI representatives. OpenAI notes that while vulnerabilities in digital systems have existed for years, these new tools can help legitimate actors defend public services and critical infrastructure more effectively.
OpenAI plans to continue updating these defensive models throughout 2026. The company believes that as AI capabilities grow, the tools used for system defense must also improve in order to strengthen resilience and keep digital environments secure.
Industry Expert Reactions
Several industry experts shared their views on the announcement with Hackread.com, noting both the benefits and the remaining hurdles for the sector.
Marcus Fowler, CEO of Darktrace Federal, called the move a positive step but warned about the realities of fixing bugs. He stated, “Most organisations are still constrained by the realities of remediation once an issue is discovered: patch development, testing, deployment, uptime requirements, and resource limitations. Faster or deeper analysis doesn’t automatically translate to faster or more effective risk reduction.”
Ronald Lewis from Black Duck highlighted the different approaches taken by the two tech giants. He noted, “OpenAI’s TAC framework reflects a more conservative, tool-centric risk posture. It treats advanced cyber capabilities as regulated instruments, appropriate for controlled deployment within professional workflows.” This stands in contrast to Anthropic’s approach, which focuses more on how the model behaves rather than who is allowed to use it.
Image by BoliviaInteligente on Unsplash






