Project Glasswing Highlights Risk AI Poses To Security Software Across Multiple Core Applications
Security systems and critical software infrastructure are increasingly exposed to AI-powered threat actors capable of identifying and exploiting vulnerabilities at scale, and security integrators and security managers should be paying attention.
The scale of this risk has been exposed by Project Glasswing, a new initiative launched to address the threat AI poses to weaknesses in existing systems. The project brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks, and is focused on securing critical software using advanced AI capabilities.
The initiative follows testing of a new unreleased frontier model from Anthropic called Claude Mythos Preview, which demonstrates a significant shift in cybersecurity risk. The model can identify and exploit software vulnerabilities at a level exceeding most human experts.
Mythos Preview has already identified thousands of high-severity vulnerabilities, including issues affecting major operating systems and web browsers. In multiple cases, the model identified vulnerabilities that had remained undetected for decades despite extensive human review and automated testing.
Examples include a 27-year-old vulnerability in OpenBSD enabling remote system crashes, a 16-year-old flaw in FFmpeg missed after millions of automated test executions, and a chain of vulnerabilities in the Linux kernel allowing escalation to full system control. Many of these vulnerabilities were identified and exploited autonomously without human input.
The capability reflects a broader shift in cybersecurity, where the time between vulnerability discovery and exploitation is compressing rapidly. Tasks that previously required specialist expertise can now be automated, increasing the likelihood of more frequent and more sophisticated attacks.
Project Glasswing is intended to apply these capabilities defensively. Participating organisations will use Mythos Preview to identify and remediate vulnerabilities across both proprietary and open source systems. Anthropic will share findings across the industry and has committed up to US$100 million in usage credits for the model, along with US$4 million in direct funding to open source security organisations.
Access has also been extended to more than 40 additional organisations responsible for critical software infrastructure, enabling broader scanning and remediation efforts.
The initiative reflects the scale of the challenge. Modern software underpins security, automation, banking, healthcare, energy, logistics and government systems, and has always contained vulnerabilities. However, AI is reducing the cost and expertise required to find and exploit these flaws, increasing exposure across sectors. At the same time, the same AI capabilities offer defensive advantages. Models such as Mythos Preview can be used to identify vulnerabilities earlier, improve code quality and reduce the number of exploitable flaws in new software.
Project Glasswing is an initial step in establishing a coordinated, industry-wide response, with further collaboration expected across developers, security researchers, infrastructure providers and governments. You can learn more about Project Glasswing here or read more SEN news here.