AI Cyber Threat Series: AI Is Now a Weapon. Day One Proved It.
Key Findings
- Mythos containment failed in hours, not months. On the same day Anthropic announced its controlled AI cyber tool, an unauthorized group on a private Discord chat gained access through a third-party contractor. The group was still using it two weeks later.
- Three supply chain breaches enabled the access. A poisoned open-source AI library (LiteLLM) led to a breach at an AI training vendor (Mercor), which exposed Anthropic’s internal naming conventions. None of the three breaches required sophisticated attackers.
- AI cyber capability is no longer single-vendor. OpenAI shipped GPT-5.4-Cyber, expanded its Trusted Access for Cyber program to thousands of defenders, and announced an autonomous AI security researcher (Aardvark) within two weeks of the Mythos announcement.
- The capability moat is the security pipeline, not the model. Independent researchers replicated key Mythos vulnerability findings using small, cheap, publicly available AI models when paired with the right scaffolding.
- AI tools are now first-class breach targets. AI gateways and libraries hold API keys for every AI service a company uses. One compromised AI dependency can expose an entire organization’s AI stack, including connections to proprietary data, code, and customer records.
- Behavioral threat hunting is the detection layer that holds up. The Mythos access chain exploited a contractor trust relationship, stolen reconnaissance data, and a URL guess. None of these trigger conventional EDR, SIEM, or DLP rules.
- Howler Cell hunts for AI risk as well as AI threats. Unsanctioned AI use, AI data leakage, exposed AI credentials, and rogue AI workflows are active risk identification targets, not secondary concerns.
What Happened
On April 7, 2026, Anthropic released Mythos, an AI model so capable at finding software security flaws that the company decided not to release it to the public. Instead, they gave a small group of major tech and security companies access through a carefully contained sandbox called Project Glasswing. The idea: let trusted defenders use the tool to find and fix bugs in critical software before bad actors could build something like it.
On the same day Mythos was released, an unauthorized group on a private Discord chat already had access to it. That was day one. This blog explains how that happened, what it means, and why your security stack now has gaps it did not have eleven days ago.
My first blog in this series, The Glasswing Window: Why the Mythos Release Should Be on Every Security Leader's Radar, argued that the protection Glasswing offered was narrower than most people thought. Eleven days later, that argument is no longer theoretical, and the picture is worse than I expected. The containment around Mythos failed in hours. Other AI labs are racing to ship their own cyber-capable models. Criminal and nation-state groups are already using AI as part of live attack campaigns. AI is actively operating as a cyber weapon. Your defensive posture must start by understanding this new reality.
Figure 1: The eleven days between the original Glasswing analysis and today.

Day One: How Mythos Containment Failed
Bloomberg broke the story on April 21, 2026. Anthropic confirmed an investigation on April 22, 2026. As of this writing, the unauthorized group still has access.
Here is what happened, step by step:
- A small group on a private Discord chat hunted for unreleased AI models. One member of the group worked at a company doing contracted work for Anthropic.
- The group used inside knowledge of how Anthropic names and stores its models to guess where Mythos was hosted online. That naming knowledge did not come from Anthropic directly.
- The insider knowledge came from another breach: a company called Mercor. Mercor sells AI training services to Anthropic, OpenAI, and Meta. In March 2026, Mercor was hacked and attackers stole 4 terabytes of data, including details about how Anthropic organizes its model files.
- Mercor was breached because of a third compromise: an open-source library called LiteLLM. LiteLLM is a popular tool that helps applications connect to AI models. It is downloaded 97 million times a month. A hacker group called TeamPCP planted malicious code in it that automatically ran every time the library loaded and silently stole credentials from any company that pulled the bad version. Every organization running that version unknowingly handed over access to their private AI data repositories, model endpoints, and stored API keys.
Threat Actor Profile: TeamPCP. The group behind the LiteLLM supply chain compromise that ultimately enabled unauthorized Mythos access.

- With Mythos access, the threat is concrete. An unauthorized actor using Mythos can autonomously scan target networks for vulnerabilities, write working exploits, and execute multi-stage attacks. Data at risk includes proprietary source code, internal credentials, infrastructure blueprints, and anything accessible through AI-integrated workflows.
Three breaches stacked on top of each other, and none of them required a sophisticated attacker. The bad version of LiteLLM was live for only 40 minutes. The Discord group made an educated guess at a URL. Yet that chain reached the most safety-conscious AI lab in the world and produced access to its most powerful cyber tool on day one.
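To make the LiteLLM step concrete, below is a defanged sketch of the pattern a poisoned library relies on. It is illustrative only, not TeamPCP's actual payload: the point is that top-level module code executes the moment the dependency is imported, so merely loading the bad version is enough to trigger credential harvesting.

```python
# Illustrative, defanged sketch of an import-time credential stealer.
# Top-level code in a Python module runs as soon as the module is
# imported -- no function call by the victim application is required.
import os

def _collect_candidate_secrets():
    # Harvest anything in the environment that looks like a credential.
    return {k: v for k, v in os.environ.items()
            if any(marker in k for marker in ("KEY", "TOKEN", "SECRET"))}

# In a real compromise this dictionary would be posted to an
# attacker-controlled endpoint; this sketch stops at collection.
_harvested = _collect_candidate_secrets()
```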
Figure 2: The Mythos access chain, mapped to MITRE ATT&CK techniques and showing where behavioral hunting fills the detection gaps.

Why Third-Party and Supply Chain Risk Is the Defining Pattern of 2026
The specific incident matters. So does the pattern it represents. Third-party and supply chain risk is not new. What changed is the size of the attack surface and how fast it gets exploited.
Three structural realities are converging:
- A few vendors carry everyone. Modern software depends on a small number of common open-source components and a small number of major vendor platforms. Compromise one and you reach thousands of organizations.
- Trust gets inherited. Vendors and contractors operate inside the trust boundary set by the parent company. Their identities and behavior are rarely watched as closely as those of employees, but the damage from a compromise is just as severe.
- AI tools concentrate credentials. AI gateways and libraries hold API keys for every AI service a company uses, including connections to internal knowledge bases, proprietary code repositories, and customer data pipelines. One compromised AI tool can expose an entire organization’s AI stack at once.
The security firm Palo Alto Networks calls this pattern “Living off the AI Land.” Attackers are weaponizing AI tools and assistants the same way they have weaponized PowerShell and other built-in admin tools for years. AI tools sit inside trust boundaries. They process sensitive data. They run with elevated privileges. They are now first-class targets.
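Hunting for that concentration of credentials can start simply. The sketch below is a minimal example, with illustrative key-shape patterns rather than a complete list of real provider formats: walk a directory tree and flag files that appear to contain hardcoded API keys or bearer tokens.

```python
# Minimal sketch: flag files that look like they contain hardcoded
# credentials. Patterns are illustrative, not provider-complete.
import re
from pathlib import Path

KEY_PATTERNS = {
    "generic_api_key": re.compile(r"""(?i)api[_-]?key\s*[:=]\s*['"]?[\w\-]{20,}"""),
    "bearer_token": re.compile(r"""(?i)authorization:\s*bearer\s+[\w.\-]{20,}"""),
}

def scan_tree(root: str):
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue  # skip directories and large binaries
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in KEY_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings

if __name__ == "__main__":
    for path, kind in scan_tree("."):
        print(f"{path}: possible {kind}")
```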
The numbers from CrowdStrike’s 2026 Global Threat Report make the velocity clear. AI-enabled adversary activity is up 89% year over year. The average time it takes attackers to move from first foothold to spreading inside a network has dropped to 29 minutes. The fastest observed was 27 seconds. 42% of vulnerabilities are now exploited before they are even publicly disclosed.
One of the groups driving that surge is FANCY BEAR (APT28), a Russian state-linked threat group attributed to GRU military intelligence. Active since at least 2004, they are responsible for some of the most consequential cyber operations on record, including the 2016 DNC breach and the 2022 “Nearest Neighbor” Wi-Fi lateral movement technique. Their 2025 introduction of LAMEHUG malware represents a structural change in how nation-state actors operate: instead of building custom reconnaissance tooling, they now route that work through public AI APIs.
Threat Actor Profile: FANCY BEAR (APT28). Russia’s GRU-attributed threat group, now using public AI models as operational attack infrastructure.

LAMEHUG is the direct expression of that shift. It makes live calls to public large language model APIs during an attack, sending natural-language queries to identify targets, collect and summarize documents, and craft context-aware phishing lures. Behavior adapts based on what the AI returns. There is no fixed script and no custom infrastructure. All activity appears as legitimate HTTPS traffic to known AI provider domains. Conventional detection does not see it.
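The countermeasure is behavioral, not signature-based. A minimal sketch of the idea follows, assuming (host, domain) event pairs from DNS or proxy logs and a hand-maintained list of AI API domains: compare each host's current AI traffic against its own historical baseline and flag first-time talkers.

```python
# Minimal sketch of behavioral baselining for AI API traffic.
# Event format and domain list are assumptions for illustration.
from collections import defaultdict

AI_API_DOMAINS = {"api.openai.com", "api.anthropic.com",
                  "generativelanguage.googleapis.com"}

def find_new_ai_talkers(baseline_events, window_events):
    """Each event is a (host, domain) pair from DNS or proxy logs."""
    baseline = defaultdict(set)
    for host, domain in baseline_events:
        baseline[host].add(domain)
    alerts = []
    for host, domain in window_events:
        if domain in AI_API_DOMAINS and domain not in baseline[host]:
            alerts.append((host, domain))
    return alerts

# Example: a build server suddenly querying an LLM API deserves a look.
print(find_new_ai_talkers(
    baseline_events=[("dev-laptop-17", "api.openai.com")],
    window_events=[("build-server-03", "api.anthropic.com")],
))
```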
Threat Spotlight: LAMEHUG malware (FANCY BEAR / APT28). Activity appears as legitimate HTTPS traffic to public AI API endpoints — conventional detection does not see it.

The Mythos access chain is not an outlier. It is the pattern most organizations should expect to see across their own environments in the next eighteen months.
Capability Is Proliferating, Not Concentrating
In the eleven days since the original blog, the AI cyber tool landscape has filled in fast. Three major AI labs are now shipping cyber-focused tools, each with a different strategy for who gets access and how.
Figure 3: The three major AI labs and their April 2026 cyber tool strategies.

Anthropic: restrict access to a small set of defenders
Project Glasswing remains a controlled program with twelve to fifty partner organizations and $100 million in usage credits. The premise: give major defenders a head start so they can harden critical software before the same capability reaches attackers.
- Mozilla’s Firefox 150 ships fixes for 271 vulnerabilities Mythos identified in a single evaluation pass.
- Palo Alto Networks reported the equivalent of a year of penetration testing in less than three weeks.
- Containment failed on launch day. An unauthorized group still has access. Assume they are not the only ones.
OpenAI: verify the user, then expand access
OpenAI chose a different path: scale access through identity verification rather than restrict it to a small group.
- Released GPT-5.4-Cyber, tuned for security work with lower refusals on sensitive defensive tasks.
- Expanded Trusted Access for Cyber to thousands of verified defenders and hundreds of security teams.
- Announced Aardvark, a fully autonomous AI agent that scans codebases and proposes patches.
- Shipped GPT-5.5 broadly to all paid users on April 24.
Google: integrate AI into the security stack directly
Google’s Big Sleep agent takes a third approach: build AI capability into existing security workflows rather than releasing a standalone tool.
- Built on a collaboration between DeepMind and Project Zero, Google’s elite vulnerability research team.
- Has independently found 20+ open-source CVEs in production software.
- Stopped a SQLite zero-day before it could be exploited in the wild, the first AI to achieve this milestone.
What All Three Approaches Confirm
The strategic disagreements matter less than what all three approaches confirm together: AI cyber capability at this level is no longer gated or single-vendor. A security research firm called AISLE demonstrated that several of Mythos’s headline vulnerability findings can be detected by smaller, cheaper, openly available AI models when given the relevant code segment to examine. The caveat matters: AISLE’s tests worked on isolated code snippets, not by scanning entire codebases the way Mythos does. The gap is real, but it is narrower than the Glasswing announcement implied. The real moat is the security workflow built around the model, not the model itself. That holds regardless of model size.
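The AISLE setup is easy to picture. Below is a minimal sketch of that scaffolding shape, assuming the openai Python client, with gpt-4o-mini standing in for "small, cheap, public model"; the snippet, prompt, and model choice are all illustrative, not AISLE's actual pipeline.

```python
# Minimal sketch: hand a small public model one isolated code segment
# and a narrowly scoped security-review prompt. Assumes the openai
# Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SNIPPET = '''
def read_record(buf, offset, length):
    return buf[offset:offset + length]  # no validation of caller input
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # stand-in for a small, cheap, public model
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. Report input-validation "
                    "and bounds-checking flaws in the given code only."},
        {"role": "user", "content": SNIPPET},
    ],
)
print(response.choices[0].message.content)
```

The scaffolding, not the model, does the heavy lifting here: selecting the right code segment, scoping the question, and validating the answer is where Mythos-class pipelines earn their results.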
OpenAI CEO Sam Altman called Anthropic’s Mythos rollout “fear-based marketing” on April 21. The news cycle running in parallel contradicted that characterization. Bloomberg had confirmed that same day that an unauthorized group had been running Mythos continuously since launch. Mozilla shipped fixes for 271 bugs Mythos had found. The UK AI Security Institute confirmed Mythos can autonomously execute multi-stage network attacks. A model to which an adversarial group sustains unauthorized access for two weeks is not marketing material. It is an operational asset.
The Parity Timeline Is Already Compressed
The original Glasswing blog cited Anthropic’s estimate that it would take six to eighteen months for equivalent capability to reach other actors. That estimate did not survive the first eleven days.
- OpenAI shipped a cyber-permissive model and an autonomous AI security researcher within two weeks. The capability is no longer single-vendor.
- Independent researchers reproduced key Mythos findings on small, openly available models. The capability does not require the largest AI labs’ training resources.
- A Discord group has had Mythos access for fourteen days. The capability is no longer behind a controlled-access program.
David Lindner, CISO at Contrast Security, put it plainly in Fortune: if a Discord group got access, China already has it. He was making a point about likelihood, not a forensic claim. The point stands. The right operating assumption is that the parity window has already closed, not that eighteen months remain.
Kemba Walden, the former Acting National Cyber Director of the United States, reinforced that position in a separate Fortune commentary on April 23. Mythos has demonstrated an 83% success rate writing working exploits on its first attempt. Its own internal testing documents an unexpected sandbox breakout where the model bypassed its safety guardrails. Walden’s warning is direct: the technical debt inside U.S. critical infrastructure is coming due, and small and mid-sized businesses and under-resourced state and local agencies are the least prepared to absorb it.
A compressed timeline means compressed response time. The question is not whether AI-capable attackers will reach your environment. It is whether your detection and hunting capability will see them when they do.
What This Means For Defenders
The Patching Problem Is About to Get Worse
Mythos identified 271 Firefox vulnerabilities in a single evaluation pass. Palo Alto Networks reported Mythos accomplished the equivalent of a year of penetration testing in under three weeks. Those are the results from a single model over a short window.
Scale that across every major AI lab now shipping cyber-capable tools, every criminal group running automated reconnaissance, and every nation-state actor with access to equivalent capability. The volume of newly discovered vulnerabilities is about to increase by an order of magnitude, and it is not going to slow down.
Most security teams cannot keep pace with their current patch backlog. The average organization already has more critical vulnerabilities than it can remediate in the time available before exploitation. AI-driven vulnerability discovery does not change that math gradually. It breaks it.
Three things follow:
- Prioritization becomes the critical skill. Not every vulnerability carries the same risk in your specific environment. A critical CVE in software you do not run is not your problem. A medium-severity finding in a system your AI vendor has privileged access to might be your worst day. Context determines priority. Generic severity scores do not. (A sketch of this idea follows this list.)
- Automated and AI-guided patching programs are no longer optional. The patching lifecycle has to compress. Organizations that rely on manual review, change control windows measured in weeks, and human-approved deployments for every fix will fall further behind with every new vulnerability batch that drops. The conversation has shifted from whether to trust automated patching to how to implement it safely.
- Meridian accelerates both decisions. Knowing which vulnerabilities to prioritize requires knowing which assets are exposed, which vendors touch them, which controls are in place, and what the blast radius looks like if a specific system is compromised. Meridian provides that entity context across more than 500 integrations in real time. When a new critical patch drops, Meridian-fueled context tells you whether it applies to your environment, who owns the affected system, what depends on it, and whether you can patch automatically or need a human decision. That is the difference between a patching program that scales and one that collapses under volume.
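The prioritization logic from the first bullet, as a toy model. The fields and weights below are invented for illustration and are not Meridian's scoring model; the point is that inventory, exposure, vendor access, and blast radius can legitimately rank a medium-severity finding above a critical one.

```python
# Toy context-weighted prioritization. Weights are illustrative.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    base_severity: float       # generic CVSS-style score, 0-10
    asset_in_inventory: bool   # do we even run the affected software?
    internet_exposed: bool
    vendor_has_privileged_access: bool
    blast_radius: int          # downstream systems reachable from the asset

def contextual_priority(f: Finding) -> float:
    if not f.asset_in_inventory:
        return 0.0  # a critical CVE in software you do not run is noise
    score = f.base_severity
    if f.internet_exposed:
        score *= 1.5
    if f.vendor_has_privileged_access:
        score *= 1.5  # third-party trust paths amplify impact
    return score + min(f.blast_radius, 20) * 0.25

findings = [
    Finding("CVE-A", 9.8, asset_in_inventory=False, internet_exposed=True,
            vendor_has_privileged_access=False, blast_radius=0),
    Finding("CVE-B", 5.4, asset_in_inventory=True, internet_exposed=False,
            vendor_has_privileged_access=True, blast_radius=12),
]
for f in sorted(findings, key=contextual_priority, reverse=True):
    print(f.cve, round(contextual_priority(f), 1))  # CVE-B outranks CVE-A
```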
The organizations that will handle the coming wave are the ones building patching infrastructure now that assumes AI-scale vulnerability discovery as the baseline, not the exception.
Behavioral Hunting Is the Detection Layer That Holds Up
Howler Cell is Cyderes’ elite threat services practice. It brings together senior threat hunters, DFIR investigators, threat intelligence analysts, and offensive security operators working as one team. Across hundreds of customer environments and multiple industries, the finding is consistent: every security stack has gaps, and the attacks that cause the most damage are the ones that fall through them.
Proactive hunting finds those attacks. They get through not because the tools fail, but because the tools work exactly as designed. They were simply not designed for what the attacker did.
The Mythos access chain makes that concrete. Three reasons conventional detection misses it:
- No EDR or SIEM rule fires for an authorized contractor logging into a vendor environment. That is the design. The contractor is supposed to be there.
- No data loss prevention rule catches stolen data being used as reconnaissance for a future breach. The Mercor breach was Mercor’s problem in March. The consequences landed at Anthropic in April.
- No vulnerability scanner finds an attacker guessing a URL. Guessing a URL is not a CVE. It is operator tradecraft.
Howler Cell hunters have seen this pattern across financial services, healthcare, energy, and critical infrastructure: the breach that mattered was not the one the platform flagged. It was the one a hunter found by asking what normal looks like and why something does not match.
The AI layer changes the scope of that problem, not the nature of it. Unsanctioned AI use, credential exposure, rogue AI workflows, and AI-enabled reconnaissance do not have mature signature coverage. They require behavioral baselines and hypothesis-driven hunting. AI risk identification is a first-class hunting mission. That is what Howler Cell is built for.
Figure 4: Four categories of AI risk that customers increasingly need behavioral coverage for. None of these reliably trigger conventional EDR or DLP rules.

Threat hunting in this environment is risk identification work as much as it is threat detection. What is new is the breadth of risk surfaces that AI brings into scope. Hunting for unsanctioned AI use, AI data leakage, exposed AI credentials, and rogue AI-driven workflows matters as much as hunting for post-exploitation behavior. The path from one to the other is short, and the platform layer rarely sees either.
Threat intelligence is the input that makes hunting useful. The dual-fork model from the original blog still applies: one fork feeds high-fidelity SOC detections, the other feeds the hunt team for what standard telemetry will miss. In a Mythos-era environment, both forks now include AI-specific intelligence: dark web chatter about new AI attack frameworks, malicious open-source AI packages, supply chain incidents at AI vendors, and active campaigns weaponizing legitimate AI platforms.
DFIR is the third leg. When an AI-driven intrusion lands, the investigation requirements are different. Reverse engineering an AI-orchestrated attack chain requires expertise that combines malware analysis, prompt forensics, and traditional incident response. Howler Cell’s DFIR team operates around the clock, with expert investigators ready to respond the moment an incident is confirmed. That capability must be in place before the incident, not assembled during it.
Meridian: Why Context Determines Who Wins the AI-on-AI Race
Defending against AI-driven attacks will increasingly involve AI-driven defense. The quality of those defensive AI agents depends entirely on the context they have to work with. An AI agent triaging a flood of new zero-day vulnerabilities is only as accurate as the entity context it can see. Fragmented context produces fragmented answers, and the Mythos access chain makes that concrete.
In that chain, the questions that mattered for response speed were never just “what is vulnerable?” They were: which third party is involved, what access does that third party hold, what data did they pull, who else inherits that access, and what is the blast radius if it is misused. Without answers to all of those at once, response slows and fragments across teams.
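Those are graph questions. Below is a toy sketch, with invented entities and edges, of the Mythos chain modeled as explicit access relationships; a production entity fabric maintains this map continuously across live integrations rather than from a hand-built dictionary.

```python
# Toy access graph: an edge means "compromise of X gives reach into Y".
# Entities and edges are invented for illustration.
from collections import deque

ACCESS_GRAPH = {
    "litellm (library)": ["mercor (vendor)"],
    "mercor (vendor)": ["anthropic naming data"],
    "anthropic naming data": ["mythos endpoint"],
    "contractor-account": ["internal-wiki", "model-registry"],
}

def blast_radius(start):
    """Everything transitively reachable from a compromised entity."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in ACCESS_GRAPH.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(blast_radius("litellm (library)"))
# {'mercor (vendor)', 'anthropic naming data', 'mythos endpoint'}
```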
Figure 5: How Meridian connects intelligence, identity, and asset data into a single context layer for AI security, hunting, and investigations.

Meridian, Cyderes’ entity fabric, addresses that gap directly. It connects identity, asset, access, and exposure data across more than 500 integrations into a single, continuously validated risk model. Every alert and every vulnerability finding is evaluated through verified entity context rather than static severity scores. When AI agents run for detection, prioritization, or response, they operate from a shared reality rather than partial signals. In a Mythos-era environment, that is the difference between a triage process that scales to AI-discovery volumes and one that fragments under them.
The race between AI attackers and AI defenders will be decided on context quality as much as on model capability. Defenders limited by fragmented context lose ground even when their models are capable. Meridian closes that gap across AI security, threat hunting, and DFIR.
The Window Is Smaller, Not Closed
Eleven days ago, I argued that the window was open and would not stay open. The window is still open. It is also smaller than it was, and the rate of compression is faster than the original timeline assumed. Three things follow:
- AI dependencies belong in your asset inventory now. Open-source AI libraries, gateways, and orchestration tools need the same dependency hygiene as any other production-critical component. Pin your versions. Use cryptographic hashes (a minimal verification sketch follows this list). Watch outbound traffic from AI infrastructure, and build an inventory of which credentials live inside which AI tools.
- Third-party access paths need behavioral coverage. Vendor and contractor sessions are the unmonitored side of most environments. They are also the most likely path for AI-era intrusions. Hunting for privilege creep, anomalous session behavior, and cross-tenant access is no longer optional.
- Hunt for AI risk as well as AI threats. Unsanctioned AI use, prompt-injected internal assistants, exposed AI credentials, and rogue AI-driven workflows are risks before they become incidents. Identifying them inside a customer environment is part of the hunt mission, not adjacent to it.
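On the first point, the hash check itself is mechanical. pip's --require-hashes mode enforces this natively for Python dependencies; the sketch below shows the underlying comparison, with a placeholder filename and digest standing in for a real vetted artifact.

```python
# Minimal artifact pinning: refuse any downloaded dependency whose
# SHA-256 digest is not on an explicit allowlist. Filename and digest
# below are placeholders, not a real vetted release.
import hashlib
import sys
from pathlib import Path

PINNED_HASHES = {
    "litellm-1.0.0-py3-none-any.whl":
        "0000000000000000000000000000000000000000000000000000000000000000",
}

def verify_artifact(path: str) -> bool:
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    expected = PINNED_HASHES.get(Path(path).name)
    return expected is not None and digest == expected

if __name__ == "__main__":
    artifact = sys.argv[1]
    if not verify_artifact(artifact):
        sys.exit(f"REFUSED: {artifact} does not match a pinned hash")
    print(f"OK: {artifact} verified")
```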
Mythos containment failed in hours. That is what eleven days proved. The defensive question now is whether your stack is operational against this pattern before the next failure, and next time it will not be a Discord group.
Ready to close your security gaps?
To stay ahead of today’s relentless threatscape, you’ve got to close the gap between security strategy and execution. Cyderes helps you act fast, stay focused, and move your business forward.
