Cyber Industry Reacts to Anthropic's Project Glasswing as AI Accelerates the Race to Secure Software

  • 28 minutes ago

A coalition of the world’s largest technology companies is betting that artificial intelligence can tip the balance in cybersecurity before attackers fully weaponize it.


Anthropic this week unveiled Project Glasswing, a coordinated effort with industry heavyweights including Amazon, Apple, Microsoft, Cisco, CrowdStrike, Broadcom, Palo Alto Networks, and the Linux Foundation. The initiative centers on a new AI system called Claude Mythos Preview, a model designed to uncover deeply embedded software vulnerabilities at a scale that traditional tools have failed to reach.


The launch signals a turning point in cybersecurity strategy. Rather than relying on human researchers and incremental automation, the industry is beginning to deploy frontier AI models that can scan vast codebases in parallel, reason about suspect code paths, and validate the flaws they uncover.


Early results suggest the shift could be significant. In testing, the model surfaced thousands of previously unknown vulnerabilities, including long-standing flaws in widely used open source systems. Some of those issues had persisted for more than a decade despite repeated analysis by developers and automated scanning tools.


Anthropic says all identified vulnerabilities have been disclosed and patched in coordination with maintainers. The company is committing up to $100 million in compute credits and additional funding to support remediation efforts, particularly within the open source ecosystem that underpins most modern infrastructure.


At the center of the initiative is a controlled-access approach. Mythos Preview will not be released publicly. Instead, it is being distributed to a limited group of partners and critical infrastructure organizations, a decision shaped by growing concern over the dual-use nature of advanced AI.


“Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs,” the company said in a blog post. “Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.”


AI Is Rewriting the Vulnerability Equation


Security experts say the project reflects a broader reality. AI is rapidly lowering the cost and time required to discover zero-day vulnerabilities, a trend that could benefit defenders or attackers depending on how access is controlled.


“Efficiently applying AI in complex topics with high volumes of data such as cybersecurity is no simple task,” said Melissa Ruzzi, director of AI at AppOmni. “Simply feeding untreated data directly into an LLM will most likely not provide the expected added value, even with the most sophisticated model, due to the intrinsic limitations of LLMs that are inherently non-deterministic and focused on language handling.”


Ruzzi emphasized that domain expertise remains critical. “Domain expertise combined with AI expertise is key for any AI application in security. The big challenge here is having expertise within each of the different security domains involved, such as identity security, endpoint security and cloud security.”


At the same time, the expansion of AI-driven development is creating a feedback loop. As developers use AI to generate more code, the total volume of potential vulnerabilities grows alongside it.


“You can also look at this from another angle: try using Claude to write some code and see how many bugs, or even new zero-days, it produces,” said Nick Mo, CEO and co-founder of Ridge Security Technology. “Claude Code is already making developers many times more productive than before, which means the number of potential vulnerabilities being introduced is also many times greater. It’s writing code and writing vulnerabilities at the same time. No wonder they’re rushing to get security companies involved first. Digging holes and filling them simultaneously, the question is just which side is faster.”


Industrializing Zero-Day Discovery


Some researchers believe Project Glasswing represents the early stages of automated vulnerability discovery at industrial scale.


“Anthropic’s Claude Mythos Preview has effectively industrialized zero-day discovery, identifying over 500 high-severity vulnerabilities in core open-source software that escaped decades of human and automated scrutiny,” said Noelle Murata, senior security engineer at Xcape. “To manage this massive ‘vulnerability debt,’ Anthropic launched Project Glasswing, a restricted partnership with 40 tech giants like Microsoft and Apple to coordinate global patching.”


Murata pointed to the initiative’s funding model as a critical component. “By pledging $100 million in compute credits to open-source maintainers, the initiative aims to bridge the gap between AI-driven discovery and the human speed of remediation.”


Still, not everyone is convinced the claims match the reality. Some experts argue that the underlying capabilities may be overstated or rely heavily on pre-existing context.


“Anthropic has a reputation for exaggerating the capabilities of their models, especially around their ability to find novel vulnerabilities,” said Steven Swift, managing director at Suzu Labs. “The community is not being given access to the model at this time. That means it isn’t possible to audit big claims, and we’re left with Anthropic asking us to trust them.”


Swift noted that generating exploits is not the same as discovering new vulnerabilities. “If you provide any major LLM a sufficient detail of how an exploit works, it should be able to generate a functioning exploit. This is not new.”


A Narrow Window Before Offensive AI Spreads


Even with restricted access, the broader implications are hard to ignore. Security leaders increasingly expect that similar capabilities will reach adversaries in the near future.


“Mythos Preview signals that zero-day discovery is becoming cheaper, faster, and more scalable,” said Sunil Gottumukkala, CEO of Averlon. “Even with restricted access, the broader implication is clear: we should expect more dangerous vulnerabilities to be found across major software platforms, and many organizations still don’t patch fast enough to keep up.”


That gap between discovery and remediation may become the defining risk of the AI era. Once vulnerabilities are disclosed and patches released, attackers often reverse engineer fixes to develop exploits at scale.


Joshua Marpet, senior product security consultant at Finite State, said the pace of change is already outstripping traditional security operations.


“The speed of this evolution is staggering. Three years ago, LLMs barely wrote functional code. Today, they’re autonomously surfacing zero-days at scale,” he said. “We can no longer fight machine-speed threats with manual, point-in-time reviews. Defense must become as continuous and autonomous as the attacks coming our way.”


Marpet warned that future breakthroughs may not come with responsible disclosure. “The next leap in offensive AI will easily emerge from adversaries with zero intention of giving us a ‘head start.’”


Open Source Becomes the Front Line


A key focus of Project Glasswing is open source software, which forms the backbone of everything from cloud platforms to critical infrastructure systems. Despite its importance, much of that code is maintained by small teams with limited security resources.


“Open source software constitutes the vast majority of code in modern systems,” said Jim Zemlin, CEO of the Linux Foundation. “By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation.”


The success of the initiative may ultimately depend on whether collaboration can move as fast as the technology itself. Frontier AI systems are evolving on a monthly cadence, while software patching cycles and organizational coordination often lag far behind.


Anthropic acknowledges the challenge. “Project Glasswing is a starting point,” the company said. “No one organization can solve these cybersecurity problems alone.”


What is clear is that the cybersecurity landscape is entering a new phase. AI is no longer just a tool for defenders or attackers. It is becoming the terrain itself, where both sides compete at machine speed to find and fix the same flaws.


For now, Project Glasswing represents an attempt to ensure defenders get there first.