What Are Security Experts Saying About OpenAI’s GPT-5.4-Cyber?


Days after Anthropic unveiled Claude Mythos, OpenAI launched GPT-5.4-Cyber, a model optimized for defensive cybersecurity use. Unlike Anthropic, which chose to limit the Mythos model to a select few partners, OpenAI is scaling access to its Trusted Access for Cyber (TAC) program to thousands of verified individual defenders as well as hundreds of groups protecting critical infrastructure.

OpenAI states, “Our goal is to make these tools as widely available as possible while preventing misuse. We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn’t. That means using clear, objective criteria and methods — such as strong KYC and identity verification — to guide who can access⁠ more advanced capabilities and automating these processes over time. Ultimately, we aim to make advanced defensive capabilities available to legitimate actors large and small, including those responsible for protecting critical infrastructure, public services, and the digital systems people depend on every day.” 

OpenAI says its intention is to learn by putting the model into the world and improving it over time. “As we better understand both their capabilities and risks, we update our models and safety systems accordingly,” the organization states. 

Security Leaders Weigh In

Tim Mackey, Head of Software Supply Chain Risk Strategy at Black Duck:

As each new cybersecurity-focused AI model becomes available, there is one important item for teams to remember: finding bugs is very different from fixing bugs. And while it’s great to hear that these cybersecurity models are being provided to select researchers to evaluate, unless those select teams work for your company, you’re at the mercy of whatever tuning is performed based on their feedback. One thing is clear: AI cybersecurity is here to stay and will only become more powerful. Security leaders in organizations of all sizes need to take the Anthropic and OpenAI advancements as a call to action focused on where and how AI-enabled cybersecurity will benefit their operations and scale to deal with AI-enabled adversaries.

Trey Ford, Chief Strategy and Trust Officer at Bugcrowd:

The race between OpenAI and Anthropic to arm defenders is real, and it matters. But the bottleneck was never the AI model; it’s the program architecture that decides which findings get verified, which get triaged, and which actually get fixed before an attacker reverse-engineers the same patch.

Two frontier models competing on access philosophy doesn’t solve a key problem: the human coordination layer that gives AI-discovered vulnerabilities a path to remediation. What OpenAI’s TAC expansion and Anthropic’s Glasswing both tell us is that AI-discovered vulnerabilities are outpacing the coordinated infrastructure built to remediate them. 

The next generation of security programs won’t be judged on which AI model they use to find vulnerabilities; they’ll be judged on whether they built the program architecture, researcher coordination, and triage capacity to close the gap between machine-speed discovery and human-speed remediation. That’s where the real competitive advantage in cyber defense gets built.

The OpenAI vs. Anthropic access debate is the wrong conversation for security leaders this week. Access philosophy (democratic scale versus controlled rollout) doesn’t change the structural reality: time to exploit is now measured in hours.

The CVE system wasn’t built for AI-discovery rates, and attackers don’t need Mythos to find what Glasswing couldn’t patch.

The question every CISO should be asking isn’t which model they can access; it’s whether their program was designed to act on what those models find.

Ronald Lewis, Head of Cybersecurity Governance at Black Duck:

There is a notable divergence in how OpenAI and Anthropic have approached the release of AI models with cybersecurity-relevant capabilities. OpenAI has largely followed a traditional security tool release pattern, where potentially dangerous capabilities are restricted to trusted operators. Access to its cyber-focused model (GPT-5.4-Cyber) is gated through the Trusted Access for Cyber (TAC) program, which emphasizes vetting, use-case justification, and ongoing oversight, and is designed to limit both who can access the model and how it may be used.

Importantly, OpenAI’s models underpin a broad ecosystem of third-party security products, many of which are already deployed in sensitive environments. This includes a growing array of tools across vulnerability management, threat intelligence, incident response, and digital forensics, where AI is used to accelerate analysis rather than execute actions. In this sense, OpenAI’s TAC approach mirrors how advanced forensic platforms have historically been released — restricted to validated professionals, governed by contractual controls, and designed to augment expert judgment rather than replace it.

Anthropic, by contrast, released Mythos in a way that appeared comparatively unconstrained when viewed through the lens of how sensitive security tools — such as forensic analysis software — have traditionally been distributed. Rather than heavily limiting access, Anthropic’s approach places greater emphasis on model alignment and internal self-restraint, aiming to limit what the model will choose to do rather than who is allowed to use it. This represents a deliberate departure from the conventional “dangerous tool to trusted operator” paradigm.

While Anthropic’s release strategy drew heightened scrutiny, particularly from policymakers and parts of the security community, it also reflects a different theory of risk management: that sufficiently aligned models, combined with institutional governance and partnerships such as Project Glasswing, can enable broad, high-capability use without strict individual-level access controls.

In stark contrast, OpenAI’s TAC framework reflects a more conservative, tool-centric risk posture. It treats advanced cyber capabilities as regulated instruments, suitable for controlled deployment within professional workflows, much like forensic and investigative tooling, rather than as broadly accessible general-purpose systems. The two approaches highlight a fundamental philosophical split: OpenAI prioritizes access restriction and operational oversight, while Anthropic prioritizes alignment, institutional trust, and capability preservation.

Marcus Fowler, CEO of Darktrace Federal: 

OpenAI’s latest work on scaling trusted access for cyber defense, including GPT-5.4-Cyber, is a positive step. Lowering barriers for legitimate security work and enabling more advanced defensive workflows helps put stronger capabilities in the hands of defenders. Expanding access to these kinds of tools, in a controlled way, can help organizations more quickly and effectively identify risk.

However, it’s important to keep developments like these in perspective. Some of the greatest challenges in cybersecurity today are not the identification or analysis of weak code. Most organizations are still constrained by the realities of remediation once an issue is discovered: patch development, testing, deployment, uptime requirements, and resource limitations. Faster or deeper analysis does not automatically translate to faster or more effective risk reduction. The gap between discovery and remediation continues to widen, and organizations are defending against far more than just software vulnerabilities, including identity compromise, misconfigurations, insider threats, and misuse of AI itself.

So, while these kinds of capabilities are a step forward, it remains to be seen how much they will fundamentally change the cybersecurity market. What is less likely to change is the need for strong cybersecurity hygiene and best practices within the network itself, such as zero trust, along with strong detection, visibility, continuous monitoring, and the ability to respond to and contain both known and unknown threats at speed.

https://www.securitymagazine.com/articles/102235-what-are-security-experts-saying-about-openais-gpt-54-cyber
