Claude Mythos: The Model Anthropic Says Is Too Dangerous to Release
Anthropic just did something no major AI lab has done before: publicly declared its own model too dangerous to release to the general public.
Claude Mythos, announced this week, is Anthropic’s most capable model to date. According to the company, it surpasses human experts at finding and exploiting software vulnerabilities. That capability is exactly why you can’t use it. Access is gated behind a closed enterprise program called Project Glasswing, and the model’s debut triggered an emergency government meeting with Treasury Secretary Scott Bessent, Federal Reserve Chair Jerome Powell, and the CEOs of the country’s biggest banks.
This is new territory. Not the “dangerous AI” framing — that’s been a marketing tool for years. What’s new is a company voluntarily restricting access to its own flagship product on safety grounds, with a credible explanation and verifiable real-world consequences.
Quick Summary: Anthropic Claude Mythos
| Detail | Info |
| --- | --- |
| Model | Claude Mythos (Preview) |
| Announced | April 7–10, 2026 |
| Public access | No — restricted program only |
| Why restricted | Expert-level software vulnerability exploitation |
| Access program | Project Glasswing |
| Usage credits | $100 million across partner organizations |
| Pricing (partners) | $25/$125 per million input/output tokens |
| Government response | Emergency meeting: Bessent, Powell, Wall Street bank CEOs |
| Official source | anthropic.com/glasswing |

Bottom line: The first AI model publicly flagged by its own maker as too risky for general release — and the government took it seriously enough to call a room full of bank CEOs within days.
Claude Mythos is Anthropic’s newest and most capable AI model, positioned at the top of the company’s model family. Early benchmark analysis has suggested a parameter count in the 10-trillion range, though Anthropic has not officially confirmed that figure. What the company has confirmed is the capability that made the number irrelevant to the release decision.
According to Anthropic’s own documentation on Project Glasswing, Claude Mythos has “reached a level of coding capability where it can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.” That’s a precise, specific claim from the company’s own materials — not a marketing headline. It’s the explanation Anthropic gave for why a general release wasn’t on the table.
The model isn’t theoretical. During a controlled preview period, Mythos identified thousands of high-severity vulnerabilities across production software, including a 27-year-old flaw in OpenBSD and critical vulnerabilities in every major operating system and browser. Those aren’t cherry-picked demos. They’re what the model produces at scale when pointed at real-world codebases.
Our Claude model review covers where Mythos sits in Anthropic’s broader model family.
Project Glasswing is Anthropic’s controlled access program for Claude Mythos Preview. The company made $100 million in usage credits available across the program’s partner organizations, which include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, plus over 40 additional organizations.
Early press coverage simplified this to three companies. The actual partner list is considerably broader and notably weighted toward cybersecurity and critical infrastructure: CrowdStrike, Palo Alto Networks, and Cisco alongside cloud hyperscalers and financial services. That composition isn’t accidental. Anthropic’s stated goal for Project Glasswing is using Claude Mythos to find and fix critical vulnerabilities in open-source software — before adversaries use equivalent models to find and exploit them.
For organizations inside the program, Mythos is available via Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry at $25 per million input tokens and $125 per million output tokens. Those prices are meaningfully higher than Anthropic’s current consumer-facing models, which tracks with the enterprise-only positioning.
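To make the partner pricing concrete, here is a minimal sketch of a cost estimate at the quoted $25/$125-per-million rates. The rates come from the Glasswing pricing above; the workload sizes in the example are hypothetical.

```python
# Estimate Claude Mythos partner-tier API cost at the quoted rates.
# Rates are from Anthropic's Project Glasswing pricing; the token
# counts in the example below are hypothetical illustrations.

INPUT_RATE = 25.0 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125.0 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at partner pricing."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: auditing a large codebase chunk -- 200k tokens in, 20k out.
cost = estimate_cost(200_000, 20_000)
print(f"${cost:.2f}")  # 200k at $25/M + 20k at $125/M = $5.00 + $2.50 -> $7.50
```

The 5:1 output-to-input ratio means long analysis reports, not large code inputs, dominate the bill for this kind of workload.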
Anthropic also committed $4 million in direct donations to open-source security organizations: $2.5 million to Alpha-Omega and OpenSSF via the Linux Foundation, and $1.5 million to the Apache Software Foundation. That philanthropic component matters to the framing — Project Glasswing is pitched as a security initiative, not just a premium AI tier.
According to Fortune, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with the CEOs of the country’s largest banks within days of the Claude Mythos announcement. The confirmed attendees included Jane Fraser of Citigroup, Ted Pick of Morgan Stanley, Brian Moynihan of Bank of America, Charlie Scharf of Wells Fargo, and David Solomon of Goldman Sachs. Jamie Dimon was invited but reportedly unable to attend.
The meeting’s stated purpose: the cybersecurity risks posed by AI models with Mythos-level vulnerability exploitation capabilities — specifically, the threat to financial system infrastructure.
Bloomberg separately reported that US regulators urged Wall Street banks to actively test Anthropic’s Mythos model as part of their own security evaluation processes. That’s a meaningful framing shift. The government didn’t respond by trying to block the model or restrict access further — it responded by encouraging regulated institutions to understand what they’re dealing with.
Whether that’s the right regulatory posture is an open question. But it signals something: officials view this as a preparedness problem, not primarily a prohibition problem.
AI labs have been calling their models “capable but responsible” for years. That framing has become background noise. What makes Mythos different isn’t the language Anthropic used — it’s what they did.
Voluntarily restricting a flagship model’s release is expensive. Anthropic is preparing for an IPO. Every model that doesn’t reach general availability is revenue that doesn’t materialize, hype that doesn’t build, and users who might choose a competitor that does ship something they can access. Restricting Mythos wasn’t the obvious commercial decision.
The vulnerability exploitation capabilities also explain why the concern is specific. General “AI safety” discourse often gestures at diffuse future risks: misalignment, deception, emergent behavior. Claude Mythos creates a concrete, near-term attack surface: an AI that can analyze real production systems, identify exploitable flaws, and do it faster and more comprehensively than human security researchers. That’s a capability with immediate applications for both defense and offense.
Anthropic’s decision to run controlled testing through defenders first — rather than general release — reflects a view that the offense/defense asymmetry matters enough to constrain their own commercial timeline. Our AI safety business guide covers how these constraints play out in enterprise deployments.
The OpenBSD finding is worth pausing on. OpenBSD is one of the most security-focused operating systems in existence. Its codebase is maintained by experienced security researchers who have been auditing it actively for decades. A 27-year-old vulnerability in OpenBSD isn’t a careless oversight — it’s the kind of subtle, deeply embedded flaw that evades expert review at human pace and scale.
Claude Mythos found it. Along with critical flaws in every major OS and browser.
That’s not meant to read as a scare statistic. It’s evidence that the capability claim is real. When Anthropic says the model can “surpass all but the most skilled humans” at finding vulnerabilities, this is what that looks like in practice. The model doesn’t just run faster pattern matching — it identifies classes of vulnerabilities that experienced human researchers actively missed.
The defensive implication is significant. If Mythos can find a 27-year-old flaw in OpenBSD’s hardened codebase, it can almost certainly find comparable flaws in less carefully audited software that runs banking infrastructure, hospital systems, and power grids. Project Glasswing’s defensive framing — get the defenders trained on this capability before adversaries deploy equivalent models — makes more sense viewed through that lens.
For the vast majority of organizations, Claude Mythos isn’t accessible. That’s by design.
The practical questions worth asking: What does this capability signal about the models that are publicly available? If Mythos-level vulnerability exploitation exists as a controlled capability today, how long before similar capabilities appear in open-weight models or less safety-focused competitors? And what does that mean for enterprise AI security posture?
TechCrunch’s April 7 coverage of the initial announcement noted that Anthropic’s framing positions this as a race between defenders and attackers — and that giving defenders a head start through Project Glasswing is the explicit goal. That’s a reasonable framing. It’s also a framing that depends on adversarial actors not independently developing equivalent capabilities in the meantime.
For security teams at organizations not in the Glasswing program, the near-term action items are less about Claude Mythos specifically and more about what it signals.
Our AI safety and privacy guide covers how these tools fit into enterprise security workflows.
I think Anthropic made the right call here — and I don’t say that reflexively.
The instinct in tech is to ship. To let users decide. To trust that the market will sort out misuse. That instinct fails when the capability is sufficiently dual-use and the attack surface is sufficiently large. Vulnerability exploitation at Claude Mythos scale doesn’t require an adversarial nation-state actor to cause serious harm. It requires one motivated person with API access and a target.
Restricting the model to vetted partners running defensive security programs, and pairing that restriction with $4 million in donations to the open-source security ecosystem, is the defensible version of this decision. It’s not perfect — the Project Glasswing partner list skews toward large incumbents, and the $100 million in credits primarily benefits organizations already in the room — but the alternative of a general release would have been indefensible.
The government meeting is the thing I’m watching most carefully. Treasury and the Fed convening bank CEOs over a single AI model’s capabilities is a new kind of event. It suggests regulators are taking AI-enabled systemic risk seriously enough to convene on it quickly rather than waiting for a regulatory framework to catch up. Whether that produces useful policy or mostly optics — that’s still unresolved.
What’s clear: the “AI safety is theoretical” era just got materially shorter.
**What is Claude Mythos?**

Claude Mythos is Anthropic’s most capable AI model as of April 2026. The company has not released it publicly, citing its ability to identify and exploit software vulnerabilities at a level that surpasses all but the most expert human security researchers. Access is restricted to vetted partners through the Project Glasswing program.

**Why did Anthropic restrict access?**

Anthropic stated that Claude Mythos has “reached a level of coding capability where it can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.” The company chose to restrict general access to prevent the model’s capabilities from being used offensively before defensive applications could be developed.

**What is Project Glasswing?**

Project Glasswing is Anthropic’s controlled access initiative for Claude Mythos Preview. It includes 12 named partner organizations — including AWS, Apple, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, and NVIDIA — plus over 40 additional organizations. The program includes $100 million in usage credits and pricing of $25/$125 per million input/output tokens.

**Who attended the government meeting?**

According to Fortune, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened the meeting. Confirmed attendees included bank CEOs Jane Fraser (Citigroup), Ted Pick (Morgan Stanley), Brian Moynihan (Bank of America), Charlie Scharf (Wells Fargo), and David Solomon (Goldman Sachs). Jamie Dimon was invited but couldn’t attend.

**Can I access Claude Mythos?**

Not through general release. Claude Mythos Preview is only available through the Project Glasswing program to vetted partner organizations. Pricing for partners is $25 per million input tokens and $125 per million output tokens, accessible via Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.

**How does it fit into Anthropic’s lineup?**

Claude Mythos sits above the Opus and Sonnet model families in Anthropic’s lineup. For a detailed look at Anthropic’s currently available models and how they stack up against competitors, see our Anthropic vs OpenAI comparison.

**What did it actually find?**

During the preview period, Claude Mythos identified thousands of high-severity vulnerabilities in real production software, including a 27-year-old vulnerability in OpenBSD and critical flaws across every major operating system and browser. These findings were the basis for Anthropic’s decision to restrict public access while using the model defensively through Project Glasswing.
Last updated: April 11, 2026. Sources: Anthropic Project Glasswing | Fortune: Bessent and Powell convened Wall Street CEOs | Fortune: What Anthropic’s too-dangerous model means for the AI race | Bloomberg: Bessent, Powell Summon Bank CEOs | TechCrunch: Anthropic Mythos preview
Related reading: Claude Review 2026 | AI Safety for Business | Anthropic vs OpenAI 2026