Project Glasswing Explained for AI Engineers


The notion that cybersecurity would remain a purely human discipline has just been shattered. Last week, Anthropic announced something that should fundamentally change how every AI engineer thinks about security: their newest model, Claude Mythos Preview, discovered thousands of zero-day vulnerabilities across every major operating system and browser. Not through specialized training, but as an emergent capability from general improvements in coding and reasoning.

This is not a hypothetical future concern. Engineers at Anthropic with no formal security training asked Mythos to find remote code execution vulnerabilities overnight. They woke up to complete, working exploits.

What it is: Anthropic’s most advanced AI model, with exceptional security capabilities
Key finding: Thousands of zero-days found in every major OS and browser
Access model: Restricted to ~50 companies through Project Glasswing
Investment: $100M in usage credits plus $4M to open source security

Why This Changes Everything for AI Engineers

Through building production AI systems, I have seen how quickly capabilities can emerge from general model improvements. Mythos Preview demonstrates this at a scale that demands attention. The model found a 27-year-old vulnerability in OpenBSD, an operating system renowned specifically for its security focus. It discovered a 16-year-old bug in FFmpeg that automated fuzzers had missed despite encountering that code path five million times.

A FreeBSD vulnerability, now tracked as CVE-2026-4747, allows unauthenticated attackers to gain complete root access on machines running NFS. Mythos found and exploited it autonomously. In another test, the model chained together four separate vulnerabilities to escape both the renderer and operating system sandboxes in a web browser.

This matters for AI engineers because these capabilities emerged without explicit security training. Anthropic explicitly stated they did not train Mythos for cybersecurity. The abilities appeared as downstream consequences of better code understanding and reasoning. If you are building AI agents or any system that writes, analyzes, or executes code, the security implications of improved capabilities are now impossible to ignore.

What Project Glasswing Actually Does

Anthropic recognized they could not simply release this model publicly. Project Glasswing is their solution: a controlled initiative that gives defenders a head start before models with similar capabilities become broadly available.

The launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Another 40 organizations maintaining critical software infrastructure also have access. Together, they are using Mythos defensively to find and patch vulnerabilities before malicious actors can exploit them.

Anthropic committed $100 million in usage credits for Glasswing participants. They also donated $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, plus $1.5 million to the Apache Software Foundation. The model is available through the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry at $25 per million input tokens and $125 per million output tokens.
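At those rates, budgeting a Glasswing-style workload is simple arithmetic. The sketch below uses the per-token prices quoted above; the token counts in the example are illustrative assumptions, not real workload numbers.

```python
# Per-token rates from the published Mythos pricing:
# $25 per million input tokens, $125 per million output tokens.
INPUT_RATE = 25 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated API cost in dollars for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical example: a vulnerability-hunting call that sends a
# 200K-token slice of a codebase and receives a 20K-token report.
cost_per_call = estimate_cost(200_000, 20_000)
print(f"${cost_per_call:.2f} per call")  # $5.00 input + $2.50 output = $7.50
```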

The 90-day timeline is critical. Anthropic committed to publishing findings and developing practical security recommendations within three months. This includes vulnerability disclosure processes and supply chain security protocols that every AI engineer deploying production systems will need to understand.

The Democratization Problem

Here is the uncomfortable reality that every AI security engineer must now confront. The same capabilities that make Mythos exceptional at finding vulnerabilities also make it exceptional at exploiting them. And this capability emerged from general model improvements that every frontier AI lab is pursuing.

Simon Willison observed that security researchers like Greg Kroah-Hartman and Daniel Stenberg have noted a dramatic shift: AI-generated vulnerability reports have moved from obviously wrong to genuinely useful. Stenberg now spends hours per day handling AI-generated vulnerability reports. This is already happening with current public models; Mythos represents where all models are heading.

Some researchers point out that portions of what Mythos can do may already be achievable with smaller, openly available models. The capability gap between restricted and public models may not last long. Multiple analysts expect competing labs, including those in other countries, to release models with comparable abilities within months.

Warning: The asymmetry between offense and defense is about to get worse. Defenders must patch every vulnerability. Attackers only need to find one. When AI accelerates vulnerability discovery for both sides, the advantage tilts toward attackers unless defenders have significant head starts. Project Glasswing is attempting to provide exactly that.

What This Means for Your AI Engineering Practice

If you are building production AI systems, several immediate implications deserve attention.

First, code execution capabilities in your AI systems now carry different risk profiles. Any agent that can write, modify, or execute code has theoretical access to vulnerability discovery and exploitation capabilities as models improve. The OWASP AI Agent Security Top 10 for 2026 already lists untrusted code execution as the primary risk facing AI systems.

Second, the security surface of AI applications extends beyond traditional software concerns. Your model choice, prompt architecture, and tool access configurations all affect what security capabilities your system might inadvertently enable. Understanding AI agent tool integration becomes a security consideration, not just a capability one.
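One concrete way to treat tool access as a security configuration rather than a capability switch is a deny-by-default allowlist in front of the agent runtime. This is a hypothetical sketch: the `ToolCall` shape and the tool names are assumptions for illustration, not any real agent framework’s API.

```python
from dataclasses import dataclass

# Deny-by-default policy: only tools the deployment explicitly
# enabled may execute. Everything else is rejected, including
# tools a model upgrade might suddenly know how to use.
ALLOWED_TOOLS = {"read_file", "search_docs"}

@dataclass
class ToolCall:
    name: str
    arguments: dict

def authorize(call: ToolCall) -> bool:
    """Reject any tool call whose name is not on the allowlist."""
    return call.name in ALLOWED_TOOLS

print(authorize(ToolCall("read_file", {"path": "README.md"})))    # True
print(authorize(ToolCall("execute_shell", {"cmd": "uname -a"})))  # False
```

The design choice worth copying is the direction of the default: new capabilities stay disabled until someone consciously adds them, instead of arriving silently with the next model version.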

Third, defensive AI security is now a career specialty with urgent demand. The gap between AI capabilities and security practices has never been wider. Organizations rushing to deploy AI agents have almost no qualified staff to assess the security implications.

The Path Forward for AI Engineers

This development reinforces something I have observed throughout my career: implementation skills matter more than ever when the stakes increase. Understanding how AI systems actually work, what capabilities emerge from architectural choices, and how to deploy them safely is no longer optional knowledge.

For AI engineers specifically:

  • Audit your agent architectures. Understand what code execution, file access, and network capabilities your systems expose, and sandbox aggressively wherever agent functionality permits.

  • Monitor capability emergence. As models improve through fine-tuning or upgrades, security capabilities may emerge that you did not explicitly enable. Regular capability assessments should become standard practice.

  • Follow the Glasswing disclosures. The next 90 days will produce concrete guidance on vulnerability disclosure and security protocols for AI systems. This information will shape best practices for years.

  • Consider security specialization. The intersection of AI engineering and security is wildly undersupplied. If you have both skill sets, or are willing to develop them, the career premium is substantial.

Frequently Asked Questions

Will Claude Mythos be released publicly?

Not currently. Anthropic restricted access specifically because the model’s security capabilities are too dangerous for broad release. Access is limited to Project Glasswing participants: roughly 50 organizations working on defensive security.

Does this affect other Claude models?

Mythos Preview is a separate model from the publicly available Claude Opus 4.6 and Sonnet 4.6. However, the security capabilities emerged from general improvements in coding and reasoning. Future model updates across the Claude family may develop similar capabilities.

How should AI engineers prepare for this shift?

Focus on understanding both offensive and defensive security concepts. Audit your AI systems for code execution capabilities. Follow the Glasswing disclosures over the next 90 days. Consider whether security specialization aligns with your career goals.


To see how AI engineering fundamentals apply to building secure systems, watch the full video tutorial on YouTube.

If you are serious about building production AI systems that are both capable and secure, join the AI Engineering community where members follow 25+ hours of exclusive AI courses, get weekly live coaching, and work toward $200K+ AI careers.

Inside the community, you will find practitioners navigating these exact security challenges as they build and deploy real AI applications.

Zen van Riel

Senior AI Engineer | Ex-Microsoft, Ex-GitHub

I went from a $500/month internship to Senior AI Engineer. Now I teach 30,000+ engineers on YouTube and coach engineers toward $200K+ AI careers in the AI Engineering community.
